This repository has been archived by the owner on Feb 6, 2025. It is now read-only.

[WIP] update suggestions for adding features in AWS multi-zone support #1325

Open
wants to merge 2,122 commits into base: master

Conversation

cclhsu
Contributor

@cclhsu cclhsu commented Aug 17, 2020

Update suggestions from code review for adding features in AWS multi-zone support

Signed-off-by: cclhsu [email protected]

Why is this PR needed?

Does it fix an issue? Does it address a business case?

Add a description and a link to the issue if one exists.

Fixes #

Reminder: Add the "fixes bsc#XXXX" to the title of the commit so that it will
appear in the changelog.

What does this PR do?

Please include a brief "management" technical overview (details are in the code).

Anything else a reviewer needs to know?

Special test cases, manual steps, links to resources or anything else that could be helpful to the reviewer.

Info for QA

This is info for QA so that they can validate the change. It is mandatory if this PR fixes a bug.
If this is a new feature, a good description in "What does this PR do?" may be enough.

Related info

Info that can be relevant for QA:

  • link to other PRs that should be merged together
  • link to packages that should be released together
  • upstream issues

Status BEFORE applying the patch

How can we reproduce the issue? How can we see this issue? Please provide the steps and proof
that this issue is not fixed.

Status AFTER applying the patch

How can we validate that this issue is fixed? Please provide the steps and proof that this issue is fixed.

Docs

If docs need to be updated, please add a link to a PR to https://github.com/SUSE/doc-caasp.
At the time of creating the issue, this PR can be work in progress (set its title to [WIP]),
but the documentation needs to be finalized before the PR can be merged.

Merge restrictions

(Please do not edit this)

We are in the v4-maintenance phase, so we will restrict what can be merged to prevent surprises:

What can be merged (merge criteria):
    2 approvals:
        1 developer: code is fine
        1 QA: QA is fine
    there is a PR for updating documentation (or a statement that this is not needed)

pablochacin and others added 30 commits June 2, 2020 17:16
Run daily jobs for multiple versions, each with a different
source branch. Also select the worker type based on the branch.

Signed-off-by: Pablo Chacin <[email protected]>
Use double quotes in worker label to allow for variable
substitution.

Signed-off-by: Pablo Chacin <[email protected]>

Fix quotes in label to allow variable substitution
Signed-off-by: Vicente Zepeda Mas <[email protected]>
Signed-off-by: Vicente Zepeda Mas <[email protected]>
Signed-off-by: Vicente Zepeda Mas <[email protected]>
Signed-off-by: Vicente Zepeda Mas <[email protected]>
Kubernetes 1.17 requires go version 1.13[1]. This change updates the
makefile version check and the package spec file template to require go
1.13 and updates the module list with `go mod tidy`. This change will
require an update to the Jenkins workers to use 1.13.

[1] https://kubernetes.io/docs/setup/release/notes/#dependencies
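The Makefile version gate described above can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual Makefile check; the version strings are hard-coded examples, and the `sort -V` comparison assumes GNU coreutils:

```shell
# Sketch of a minimal Go version gate like the one described above.
# In a real Makefile the detected version would come from `go version`;
# here it is a hard-coded example string.
required="1.13"
detected="1.13.5"
# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the detected version is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$detected" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "go $detected is OK (>= $required required)"
else
  echo "go $detected is too old; >= $required required" >&2
  exit 1
fi
```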
If ci-repo label is specified, set the BRANCH_REPO environment
variable to add this repo branch to the cluster nodes.

Signed-off-by: Pablo Chacin <[email protected]>
Separate jobs in different projects depending on the version
of CaaSP to facilitate adapting their configuration independently.

Signed-off-by: Pablo Chacin <[email protected]>
Following the convention that master and feature branches
will use the default integration worker type, while the
experimental-feature and maintenance-version branches will use
the corresponding worker types.

Signed-off-by: Pablo Chacin <[email protected]>
Make v4 run from maintenance-caasp-v4 and v5 from master

Signed-off-by: Pablo Chacin <[email protected]>

Select worker type based on target branch for PR
We'd like to test the v5 versions of the k8s images.
Adapt terraform tfvars to start using SLE_15_SP2 repositories and CaaSP v5 devel projects
AutoYaST adapted for CaaSP 5 based on SLE-15-SP2
Collects logs from provisioned nodes including cloud-init and pods.

Signed-off-by: Pablo Chacin <[email protected]>
Fixes: f3d2d0b ("Update to Kubernetes v1.18.2")

Signed-off-by: Michal Rostecki <[email protected]>
versions: Fix Cilium version for k8s v1.18.2
Fixes: f3d2d0b ("Update to Kubernetes v1.18.2")

Signed-off-by: Michal Rostecki <[email protected]>

Co-authored-by: Michal Rostecki <[email protected]>
The files are supposed to be in /etc/crio/crio.conf.d/, not
/etc/crio/conf.d/. Also clean up the wording a bit.
Marek Counts and others added 18 commits August 3, 2020 12:39
Updated versions to v4.5

includes packages and ci
This reverts commit:
  * 5757a23
  * 48f33d6
  * 925f938

In 4.2.2 we reverted k8s to 1.17 prior to releasing, so we need to
backport this change to master as well.

Signed-off-by: Jordi Massaguer Pla <[email protected]>
Signed-off-by: Manuel Buil <[email protected]>
Fixes CVE-2020-8663, CVE-2020-12603, CVE-2020-12604, CVE-2020-12605
in cilium.
… authority (SUSE#1262)

* Signing OIDC server cert by custom OIDC CA if present
* New command: skuba cert generate-csr

Signed-off-by: JenTing Hsiao <[email protected]>
When removing the package, zypper will freely install any version
it finds more suitable; if there is a newer one, it will install that.
PRs are merged into the target branch before testing.
Therefore, running the PR tests against master/release branches is
redundant.

Signed-off-by: Pablo Chacin <[email protected]>
Revert the k8s version to 1.17.4 as the "previous" version for the
maintenance branch in the upgrade tests

Signed-off-by: Pablo Chacin <[email protected]>
Update to cilium 1.7.6 (bsc#1173559)
Lock kured after disabling skuba-update.timer

Fix K8s version in upgrade tests for maintenance
Update suggestions from code review for adding features in AWS multi-zone support

Signed-off-by: cclhsu <[email protected]>
@cclhsu cclhsu self-assigned this Aug 17, 2020

@Klaven Klaven left a comment


Hey, I looked over the PR a little bit this morning and had a few quick questions about it.

  1. Could you fill out the PR template a bit more? While I'm a big fan of "the code should speak for itself", a lot of the time the code does not give a good "why", and that would be helpful!

  2. The PR is quite large :D Anything you could do to slim it down would make it easier to review.

  3. Generally, I don't think the API server should be exposed externally unless someone absolutely needs it... and let's be honest, no one actually needs it. A large part of the security CVEs can be mitigated by keeping the control plane safe.

description = "etcd"
}

# api-server - everywhere

Same here; the API server should be internal-only as well.

Contributor Author


var.cidr_block is internal.

cidr_blocks = ["0.0.0.0/0"]
}

ingress {

@dannysauer Correct me if I'm wrong, but I don't think we should expose the API server to the web. This is an easy first step to securing clusters: make it so that only traffic from inside the cluster can directly hit the API server. I would be for removing this block.
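For illustration, an internal-only rule along the lines discussed might look like the sketch below. All names here (`var.vpc_cidr`, `aws_security_group.control_plane`) are hypothetical, not taken from this PR's diff:

```hcl
# Sketch only: admit API server traffic from inside the VPC rather
# than from 0.0.0.0/0. Variable and resource names are illustrative.
resource "aws_security_group_rule" "apiserver_internal" {
  type              = "ingress"
  from_port         = 6443
  to_port           = 6443
  protocol          = "tcp"
  cidr_blocks       = [var.vpc_cidr]       # internal VPC range only
  security_group_id = aws_security_group.control_plane.id
  description       = "kube-apiserver, internal traffic only"
}
```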

Contributor


I'm not completely clear why we open 80/443 to begin with; nothing listens on those ports by default, right? Was that just a copy-paste from an example that was left behind all along?

Contributor


Also, yes, exposing the API server seems undesirable. I guess it's possible to control access to the whole VPC, and that might be an acceptable control, but I honestly haven't read the entire config to see if that's being done. It still seems like it'd be preferable to elect a source IP as a "management node" or identify a management network.
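One way to realize the "management network" idea is sketched below. This is a hedged example under assumed names: `var.management_cidr` and the security group reference are hypothetical, not part of this PR:

```hcl
# Sketch only: restrict API server ingress to a designated management
# network instead of the open internet. var.management_cidr is a
# hypothetical variable, e.g. "203.0.113.0/24" set in terraform.tfvars.
resource "aws_security_group_rule" "apiserver_management" {
  type              = "ingress"
  from_port         = 6443
  to_port           = 6443
  protocol          = "tcp"
  cidr_blocks       = [var.management_cidr]
  security_group_id = aws_security_group.control_plane.id
  description       = "kube-apiserver from management network only"
}
```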

Contributor Author


This is from the original code; I just tried to rearrange the code by function. I will test removing 80/443 to see if there are any side effects.

@cclhsu cclhsu changed the title from "update suggestions for adding features in AWS multi-zone support" to "[WIP] update suggestions for adding features in AWS multi-zone support" on Aug 17, 2020
@cclhsu
Contributor Author

cclhsu commented Aug 17, 2020

Hey, I looked over the PR a little bit this morning and had a few quick questions about it.

  1. Could you fill out the PR template a bit more? While I'm a big fan of "the code should speak for itself", a lot of the time the code does not give a good "why", and that would be helpful!
  2. The PR is quite large :D Anything you could do to slim it down would make it easier to review.
  3. Generally, I don't think the API server should be exposed externally unless someone absolutely needs it... and let's be honest, no one actually needs it. A large part of the security CVEs can be mitigated by keeping the control plane safe.

OK, I will slim the PR down to focus on just multi-zone support. I added too many formatting changes and small customized improvements here and there.


@Klaven Klaven left a comment


From my point of view this looks fine, but I don't know enough about our AWS deployment to say for sure. @flavio would be a good reviewer.

@Klaven Klaven requested a review from flavio August 26, 2020 14:47
Contributor

@dannysauer dannysauer left a comment


I'm in the same boat as @Klaven; this looks fine to me as a cleanup, but I don't have enough "AWS via Terraform" background to say it's good to go.
